Code generation models have achieved impressive performance. However, they tend to be brittle as slight edits to a prompt could lead to very different generations; these robustness properties, critical for user experience when deployed in real-life applications, are not well understood. Most existing works on robustness in text or code tasks have focused on classification, while robustness in generation tasks is an uncharted area and to date there is no comprehensive benchmark for robustness in code generation. In this paper, we propose ReCode, a comprehensive robustness evaluation benchmark for code generation models. We customize over 30 transformations specifically for code on docstrings, function and variable names, code syntax, and code format. They are carefully designed to be natural in real-life coding practice, preserve the original semantic meaning, and thus provide multifaceted assessments of a model's robustness performance. With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt. In addition, we define robustness metrics for code generation models considering the worst-case behavior under each type of perturbation, taking advantage of the fact that executing the generated code can serve as objective evaluation. We demonstrate ReCode on SOTA models using HumanEval, MBPP, as well as function completion tasks derived from them. Interesting observations include: better robustness for CodeGen over InCoder and GPT-J; models are most sensitive to syntax perturbations; more challenging robustness evaluation on MBPP over HumanEval.
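To make the worst-case metric concrete, the sketch below computes a robust pass rate in which a problem only counts as solved if the model's generation passes the unit tests under every perturbed variant of its prompt. This is a minimal illustration, not the official ReCode implementation; the `passes_tests` callback is an assumed stand-in for executing generated code against a task's tests.

```python
from typing import Callable, Dict

def worst_case_pass_rate(
    generations: Dict[str, Dict[str, str]],    # problem_id -> {perturbation_name: generated_code}
    passes_tests: Callable[[str, str], bool],  # (problem_id, code) -> True if unit tests pass
) -> float:
    """Fraction of problems solved under ALL perturbations (worst-case robustness)."""
    solved = 0
    for problem_id, variants in generations.items():
        # Worst-case criterion: a single failing perturbation makes the problem unsolved.
        if all(passes_tests(problem_id, code) for code in variants.values()):
            solved += 1
    return solved / max(len(generations), 1)
```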
Incorporating contrastive learning objectives in sentence representation learning (SRL) has yielded significant improvements on many sentence-level NLP tasks. However, it is not well understood why contrastive learning works for learning sentence-level semantics. In this paper, we take a closer look at contrastive sentence representation learning through the lens of isotropy and learning dynamics. We interpret its success stories through the geometry of the representation shifts. We show that contrastive learning brings isotropy, and surprisingly learns to converge tokens to similar positions in the semantic space if given the signal that they are in the same sentence. Also, what we formalize as "spurious contextualization" is mitigated for semantically meaningful tokens, while augmented for functional ones. The embedding space is pushed toward the origin during training, with more areas now better defined. We ablate these findings by observing the learning dynamics under different training temperatures, batch sizes, and pooling methods. With these findings, we aim to shed light on future designs of sentence representation learning methods.
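For reference, here is a minimal sketch of the in-batch contrastive objective this line of work analyzes (SimCSE-style, where two encodings of the same sentence form a positive pair); the temperature and shapes are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of the same sentences under two encoder passes."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / temperature                        # (batch, batch) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    return F.cross_entropy(sim, labels)                  # pull positives together, push negatives apart
```

Lowering the temperature sharpens the softmax over negatives, which is one of the knobs such analyses vary when studying the resulting embedding geometry.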
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
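Since the abstract notes the models are publicly released, they can be loaded through the Hugging Face `transformers` library; the snippet below uses the smaller `bigscience/bloom-560m` checkpoint (the full 176B model requires multi-GPU hardware).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a 176B-parameter open-access language model", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```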
In the logic synthesis stage, the structural transformations in the synthesis tool need to be combined into optimization sequences and applied to the circuit to meet the specified circuit area and delay. However, logic synthesis optimization sequences are time-consuming to run, and predicting the quality of results (QoR) of an optimization sequence on a circuit can help engineers find better optimization sequences faster. In this work, we propose a deep learning method to predict the QoR of unseen circuit-optimization-sequence pairs. Specifically, the structural transformations are converted into vectors via an embedding method, and advanced natural language processing (NLP) techniques (the Transformer) are used to extract the features of the optimization sequences. In addition, so that the model's predictions generalize from circuit to circuit, the circuit's graph is represented as an adjacency matrix and a feature matrix, and graph neural networks (GNNs) are used to extract the circuit's structural features. For this problem, the Transformer and three typical GNNs are used, and the Transformer and GNNs are jointly learned for QoR prediction on unseen circuit-optimization-sequence pairs. The methods produced by combining the Transformer with each GNN are benchmarked. Experimental results show that joint learning of the Transformer and a GNN obtains the best results, with a mean absolute error (MAE) of 0.412 for the predicted results.
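A minimal sketch of the joint architecture as described (the exact layer sizes, GNN variant, and pooling are assumptions for illustration): a Transformer encodes the optimization sequence, a single graph-convolution step encodes the circuit's adjacency/feature matrices, and an MLP head regresses QoR, trained with L1 loss to match the reported MAE metric.

```python
import torch
import torch.nn as nn

class QoRPredictor(nn.Module):
    def __init__(self, num_transforms: int, node_feat_dim: int, d_model: int = 64):
        super().__init__()
        self.seq_embed = nn.Embedding(num_transforms, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.seq_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.gnn_proj = nn.Linear(node_feat_dim, d_model)  # one graph-conv step: A @ X @ W
        self.head = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def forward(self, seq: torch.Tensor, adj: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        seq_repr = self.seq_encoder(self.seq_embed(seq)).mean(dim=1)  # (batch, d_model)
        node_repr = torch.relu(adj @ self.gnn_proj(feats))            # (batch, nodes, d_model)
        graph_repr = node_repr.mean(dim=1)                            # mean-pool circuit nodes
        return self.head(torch.cat([seq_repr, graph_repr], dim=-1)).squeeze(-1)

loss_fn = nn.L1Loss()  # the paper reports MAE, so L1 is the natural training loss
```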
Short-term load forecasting (STLF) plays an important role in the operation of electricity trading markets. With growing concerns over data privacy, federated learning (FL) has been increasingly adopted in recent research to train STLF models for utility companies (UCs). Encouragingly, in the wholesale market, since it is not realistic for power plants (PPs) to access UCs' data directly, FL is definitely a feasible solution for PPs to obtain accurate STLF models. However, due to FL's distributed nature and the intense competition among UCs, defects occur increasingly often and lead to poor performance of the STLF model, indicating that adopting FL alone is not enough. In this paper, we propose a DRL-assisted approach, defect-aware federated soft actor-critic (DearFSAC), to robustly train an accurate STLF model for PPs to forecast precise short-term utility demand. First, we design an LSTM-based STLF model using only historical load data and time data. Then, considering the uncertainty of defect occurrence, a deep reinforcement learning (DRL) algorithm is adopted to assist FL by alleviating the model degradation caused by defects. In addition, for faster convergence of FL training, an auto-encoder is designed for dimensionality reduction and quality evaluation of uploaded models. In simulations, we validate our approach on real data of Helsinki's UCs in 2019. The results show that DearFSAC outperforms all other approaches whether or not defects occur.
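A minimal sketch of the LSTM-based forecaster described above (historical load plus calendar features in, a short-term demand horizon out); all dimensions are illustrative assumptions, and the defect-aware federated aggregation driven by the soft actor-critic agent is omitted.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, input_dim: int = 5, hidden_dim: int = 64, horizon: int = 24):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_dim, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, input_dim) = past load values + time-of-day/day-of-week features
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # forecast the next `horizon` load values
```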
We propose an approach to modeling irregularly spaced sequences of discrete events. We begin with a continuous-time variant of the Transformer, which was originally formulated (Vaswani et al., 2017) for sequences without timestamps. We embed a possible event (or other boolean fact) at time $t$ by attending over the events at times $< t$ (and the facts that were true when they occurred). We control this attention with pattern-matching logic rules that relate events and facts that share participants. These rules determine which previous events will be attended to, as well as how to transform the embeddings of the events and facts into attention queries, keys, and values. Other logic rules describe how to change the set of facts in response to events. Our approach closely follows Mei et al. (2020a), adopting their temporal formalism for logic rules. As in that work, a domain expert first writes a set of logic rules that establish the possible events and other facts at each time $t$. Each possible event or other fact is embedded using a neural architecture derived from the rules that establish it. Our only difference from Mei et al. (2020a) is that we derive a flatter, attention-based neural architecture, whereas they used a more serial LSTM architecture. We find that our attention-based approach performs about equally well on the RoboCup dataset, where logic rules play an important role in improving performance. We also compare these two methods with two previous attention-based methods (Zuo et al., 2020; Zhang et al., 2020a) on simpler synthetic and real domains without logic rules, and find that our proposed approach is at least as good as, and sometimes better than, each of the other three methods.
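A minimal sketch of the core mechanism: a boolean mask, derived from the logic rules, determines which earlier events a given event may attend to. The `rule_allows` predicate below is a hypothetical stand-in for the paper's pattern-matching rules over shared participants.

```python
import torch
import torch.nn.functional as F

def rule_masked_attention(q, k, v, event_types, rule_allows):
    """q, k, v: (n_events, dim) tensors; event_types: one type name per event."""
    n, dim = q.shape
    scores = q @ k.T / dim ** 0.5
    # Causal + rule mask: event i attends to event j only if j precedes i
    # and a logic rule links their types (a proxy for shared participants).
    mask = torch.tensor([[j < i and rule_allows(event_types[i], event_types[j])
                          for j in range(n)] for i in range(n)])
    scores = scores.masked_fill(~mask, float("-inf"))
    attn = torch.nan_to_num(F.softmax(scores, dim=-1))  # zero context if nothing is admissible
    return attn @ v
```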
Multi-agent reinforcement learning tasks place a high demand on the volume of training samples. Unlike its single-agent counterpart, distributed multi-agent reinforcement learning faces the unique challenges of demanding data transfer, inter-process communication management, and high exploration requirements. We propose a containerized learning framework to address these issues. We pack several environment instances, a local learner and a buffer, together with a carefully designed multi-queue manager that avoids blocking, into a container. The local policy of each container is encouraged to be as diverse as possible, and only the trajectories with the highest priority are sent to the global learner. In this way, we achieve a scalable, time-efficient, and diverse distributed MARL learning framework with high system throughput. To our knowledge, our method is the first to solve the challenging Google Research Football full game $5\_v\_5$. On the StarCraft II micromanagement benchmark, our method achieves $4$-$18\times$ better results compared with state-of-the-art non-distributed MARL algorithms.
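A minimal sketch of the prioritized hand-off from a container to the global learner (names and the priority scheme are illustrative assumptions): only the top-scoring local trajectories are forwarded, and a non-blocking put keeps environment stepping from stalling on a full queue.

```python
import queue
from typing import List, Tuple

def forward_top_trajectories(
    local_trajectories: List[Tuple[float, list]],  # (priority score, trajectory)
    global_queue: queue.Queue,
    keep_fraction: float = 0.25,
) -> None:
    local_trajectories.sort(key=lambda item: item[0], reverse=True)
    n_keep = max(1, int(len(local_trajectories) * keep_fraction))
    for priority, traj in local_trajectories[:n_keep]:
        try:
            global_queue.put_nowait((priority, traj))  # never block the container
        except queue.Full:
            break  # drop the remainder rather than stall the actors
```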
Recently, deep multi-agent reinforcement learning (MARL) has shown promise in solving complex cooperative tasks. Its success is partly due to parameter sharing among agents. However, such sharing may cause agents to behave homogeneously and limit their coordination capacity. In this paper, we aim to introduce diversity into both the optimization and the representation of shared multi-agent reinforcement learning. Specifically, we propose an information-theoretic regularization that maximizes the mutual information between an agent's identity and its trajectory, encouraging extensive exploration and diverse individualized behaviors. For the representation, we incorporate agent-specific modules into the shared neural network architecture, regularized by the L1-norm to promote learning sharing among agents while keeping necessary diversity. Empirical results show that our method achieves state-of-the-art performance on Google Research Football and the super-hard StarCraft II micromanagement tasks.
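A minimal sketch of the information-theoretic regularizer: maximizing the mutual information between an agent's identity and its trajectory can be lower-bounded by training a variational classifier q(identity | trajectory) and rewarding its log-likelihood. The shapes below are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IdentityPosterior(nn.Module):
    def __init__(self, traj_dim: int, n_agents: int):
        super().__init__()
        self.classifier = nn.Linear(traj_dim, n_agents)

    def mi_bonus(self, traj_embed: torch.Tensor, agent_ids: torch.Tensor) -> torch.Tensor:
        # log q(id | trajectory): large when trajectories are identifiable per agent,
        # which lower-bounds I(identity; trajectory) up to a constant entropy term.
        logits = self.classifier(traj_embed)
        return -F.cross_entropy(logits, agent_ids)

# The representation side: an L1 penalty on agent-specific modules keeps
# most parameters shared while allowing necessary per-agent deviation.
def l1_penalty(agent_specific_params) -> torch.Tensor:
    return sum(p.abs().sum() for p in agent_specific_params)
```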
Human organs constantly undergo anatomical changes due to a complex mix of short-term (e.g., heartbeat) and long-term (e.g., aging) factors. Evidently, prior knowledge of these factors will be beneficial when modeling their future state, i.e., via image generation. However, most medical image generation tasks rely only on the input from a single image, thus ignoring the sequential dependency even when longitudinal data are available. Sequence-aware deep generative models, where the model input is a sequence of ordered and timestamped images, remain underexplored in the medical imaging domain, which features several unique challenges: 1) sequences of varying lengths; 2) missing data or frames; and 3) high dimensionality. To this end, we propose a sequence-aware diffusion model (SADM) for the generation of longitudinal medical images. Recently, diffusion models have shown promising results on high-fidelity image generation. Our method extends this new technique by introducing a sequence-aware transformer as the conditional module in a diffusion model. This novel design enables learning longitudinal dependency even with missing data during training and allows autoregressive generation of a sequence of images during inference. Our extensive experiments on 3D longitudinal medical images demonstrate the effectiveness of SADM compared with baselines and alternative methods.
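A minimal sketch of the autoregressive inference loop implied by the abstract: a sequence-aware transformer summarizes the (possibly gapped) history of frames, and its output conditions the diffusion sampler for the next image. `transformer` and `diffusion_sample` are placeholders for the paper's actual modules.

```python
def generate_sequence(observed_frames, n_future, transformer, diffusion_sample):
    """observed_frames: ordered, timestamped images; gaps (missing frames) are allowed."""
    frames = list(observed_frames)
    for _ in range(n_future):
        cond = transformer(frames)           # attends over the variable-length history
        next_frame = diffusion_sample(cond)  # conditional reverse-diffusion sampling
        frames.append(next_frame)            # feed back for the autoregressive rollout
    return frames[len(observed_frames):]
```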
Optical computing is an emerging technology for next-generation, highly efficient artificial intelligence (AI) with ultra-high speed and efficiency. Electromagnetic field simulation is critical to the design, optimization, and validation of photonic devices and circuits. However, costly numerical simulation significantly hinders the scalability and turnaround time of the photonic circuit design loop. Recently, physics-informed neural networks have been proposed to predict the optical field solution of a single instance of a partial differential equation (PDE) with predefined parameters. Their complicated PDE formulation and lack of an efficient parametrization mechanism limit their flexibility and generalization in practical simulation scenarios. In this work, a physics-agnostic neural operator framework, dubbed NeurOLight, is proposed for the first time to learn a family of frequency-domain Maxwell PDEs for ultra-fast parametric photonic device simulation. We balance the efficiency and generalization of NeurOLight via several novel techniques. Specifically, we discretize different devices into a unified domain, represent parametric PDEs with a compact wave prior, and encode the incident light via masked source modeling. We design the model with parameter-efficient cross-shaped neural blocks and adopt superposition-based augmentation for data-efficient learning. With these synergistic approaches, NeurOLight generalizes to a large space of unseen simulation settings, demonstrates simulation speed two orders of magnitude faster than numerical solvers, and outperforms prior neural network models with ~54% lower prediction error and ~44% fewer parameters. Our code is available at https://github.com/jeremiemelo/neurolight.
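One way to read the superposition-based augmentation (an interpretation, not the authors' code): frequency-domain Maxwell solutions are linear in the source term, so random linear combinations of (source, field) pairs from the same device geometry yield new valid training examples at no extra simulation cost.

```python
import torch

def superposition_augment(sources: torch.Tensor, fields: torch.Tensor, n_mix: int = 2):
    """sources, fields: (batch, ...) complex tensors simulated on the same device geometry."""
    batch = sources.shape[0]
    idx = torch.randint(0, batch, (batch, n_mix))            # pick samples to combine
    coeffs = torch.randn(batch, n_mix, dtype=sources.dtype)  # random mixing coefficients
    shape = (batch, n_mix) + (1,) * (sources.dim() - 1)
    new_sources = (coeffs.view(shape) * sources[idx]).sum(dim=1)
    new_fields = (coeffs.view(shape) * fields[idx]).sum(dim=1)  # linearity of the PDE
    return new_sources, new_fields
```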